Meta Expands AI Infrastructure with Nvidia’s Blackwell and Rubin Platforms
Meta has cemented a strategic partnership with Nvidia to deploy next-generation AI hardware across its global data centers. The agreement covers shipments of Nvidia's Blackwell and Rubin GPUs, Grace CPUs, and the future Vera CPUs, marking one of the largest AI infrastructure deployments to date.
The hyperscale rollout encompasses training systems, inference clusters, and networking upgrades. Meta plans to integrate Nvidia Spectrum-X Ethernet switches into its FBOSS (Facebook Open Switching System) platform, creating an end-to-end AI-optimized architecture.
"No one deploys AI at Meta's scale," said Nvidia CEO Jensen Huang, highlighting the partnership's significance in powering recommendation systems for billions of users. The collaboration involves deep co-design of CPUs, GPUs, and networking components to push AI capabilities forward.
Meta CEO Mark Zuckerberg framed the initiative as foundational to delivering "personal superintelligence" globally. The company will deploy Nvidia GB300-based systems as part of its long-term roadmap to build what it describes as the world's leading AI infrastructure.